
    DeepIPC: Deeply Integrated Perception and Control for an Autonomous Vehicle in Real Environments

    We propose DeepIPC, an end-to-end autonomous driving model that handles both perception and control tasks for driving a vehicle. The model consists of two main parts: a perception module and a controller module. The perception module takes an RGBD image to perform semantic segmentation and bird's eye view (BEV) semantic mapping, and provides their encoded features. Meanwhile, the controller module processes these features, together with GNSS location and angular-speed measurements, to estimate waypoints along with latent features. Then, two different agents translate the waypoints and latent features into a set of navigational controls to drive the vehicle. The model is evaluated by predicting driving records and performing automated driving under various conditions in real environments. The experimental results show that DeepIPC achieves the best drivability and multi-task performance even with fewer parameters than the other models. Code is available at https://github.com/oskarnatan/DeepIPC
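The abstract's final stage, translating predicted waypoints into navigational controls, can be illustrated with a minimal sketch. Everything below is a hypothetical stand-in: the function `waypoints_to_controls`, its aim-point heuristic, and its constants are illustrative assumptions, not DeepIPC's actual control agents.

```python
import math

def waypoints_to_controls(waypoints, max_steer=1.0, desired_speed=5.0):
    """Hypothetical sketch: derive steering and throttle from predicted
    waypoints in the vehicle's local frame (x forward, y left)."""
    # Aim at the midpoint of the first two waypoints for a smoother heading.
    aim_x = (waypoints[0][0] + waypoints[1][0]) / 2.0
    aim_y = (waypoints[0][1] + waypoints[1][1]) / 2.0
    heading_error = math.atan2(aim_y, aim_x)   # radians; 0 means straight ahead
    steer = max(-max_steer, min(max_steer, heading_error))
    # Throttle proportional to the distance of the furthest waypoint, capped at 1.
    dist = math.hypot(waypoints[-1][0], waypoints[-1][1])
    throttle = min(1.0, dist / desired_speed)
    return steer, throttle

steer, throttle = waypoints_to_controls([(2.0, 0.0), (4.0, 0.0), (6.0, 0.0)])
# straight-ahead waypoints yield zero steering
```

In practice a learned or PID-based agent would replace these heuristics; the sketch only shows the waypoint-to-command dataflow.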

    DeepIPCv2: LiDAR-powered Robust Environmental Perception and Navigational Control for Autonomous Vehicle

    We present DeepIPCv2, an autonomous driving model that perceives the environment using a LiDAR sensor for more robust drivability, especially under poor illumination. DeepIPCv2 takes a set of LiDAR point clouds as its main perception input. As point clouds are not affected by illumination changes, they provide a clear observation of the surroundings regardless of the conditions. This results in better scene understanding and more stable features from the perception module, which support the controller module in estimating navigational controls properly. To evaluate its performance, we conduct several tests by deploying the model to predict a set of driving records and to perform real automated driving under three different conditions. We also conduct ablation and comparative studies with some recent models to justify its performance. Based on the experimental results, DeepIPCv2 shows robust performance by achieving the best drivability in all conditions. Code is available at https://github.com/oskarnatan/DeepIPCv
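A common way to hand LiDAR point clouds to a 2D perception encoder is to rasterize them into a bird's-eye-view grid. The sketch below is an illustrative assumption: `points_to_bev`, its ranges, and its cell size are hypothetical and may differ from DeepIPCv2's actual input encoding.

```python
import numpy as np

def points_to_bev(points, x_range=(0.0, 20.0), y_range=(-10.0, 10.0), cell=0.5):
    """Hypothetical sketch: rasterize LiDAR points (x forward, y left, z up)
    into a binary bird's-eye-view occupancy grid."""
    h = int((x_range[1] - x_range[0]) / cell)
    w = int((y_range[1] - y_range[0]) / cell)
    grid = np.zeros((h, w), dtype=np.float32)
    for x, y, z in points:
        # Keep only points inside the rectangular region of interest.
        if x_range[0] <= x < x_range[1] and y_range[0] <= y < y_range[1]:
            r = int((x - x_range[0]) / cell)
            c = int((y - y_range[0]) / cell)
            grid[r, c] = 1.0  # mark the cell as occupied
    return grid
```

Because the grid depends only on geometry, not intensity, it is unaffected by illumination, which matches the robustness argument in the abstract.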

    Design and Implementation of Embedded Water Quality Control and Monitoring System for Indoor Shrimp Cultivation

    Maintaining water quality is one of the main issues in aquaculture management. Water quality represents the condition of a pond based on several water parameters, such as dissolved oxygen (DO), temperature, pH, and salinity. All of these parameters need to be strictly supervised since they affect the life-sustainability of the cultivated organisms. DO is considered the main parameter, however, since it affects the growth and survival rate of the shrimp. Therefore, a water quality control and monitoring system is needed to maintain the water parameters at acceptable values. The system is developed on a mini-PC and a microcontroller, integrated with several sensors and actuators to form an embedded system. The system collects water quality data consisting of several water parameters and controls the DO as the main parameter. To meet the stability needs of a sensitive environment, a fuzzy logic-based controller is developed to maintain the DO rate in the water. The system is also equipped with a SIM800 module to notify the farmer by SMS and a built-in Wi-Fi module for web-based data logging, and is improved with an Android-based graphical user interface (GUI) for user-friendly monitoring. The experiment results show that the fuzzy controller attached to the system can hold the DO at the acceptable value of 6 ppm. The controller is highly robust, since its deviation over long-term use is only 0.12 ppm. Another test shows that the controller is able to overcome a given disturbance and adapt easily when the DO's set point is changed. Finally, the system is able to collect and store the data in cloud storage periodically and show the data on a website.
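The fuzzy logic idea can be sketched minimally: fuzzify the DO error with triangular membership functions, apply a small rule base, and defuzzify to an aerator power level. This is a hedged illustration only; the paper's actual rule base, membership shapes, and actuator mapping are not given here, and `aerator_power` and its breakpoints are hypothetical.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def aerator_power(do_ppm, setpoint=6.0):
    """Hypothetical Mamdani-style sketch: map a DO reading to aerator power
    in [0, 1] via three fuzzy sets and a weighted-average defuzzifier."""
    err = setpoint - do_ppm                 # positive when DO is too low
    low  = tri(err, 0.0, 2.0, 4.0)          # DO well below the set point
    ok   = tri(err, -1.0, 0.0, 1.0)         # DO near the set point
    high = tri(err, -4.0, -2.0, 0.0)        # DO above the set point
    # Rules: low DO -> full power, ok -> half power, high -> off.
    num = low * 1.0 + ok * 0.5 + high * 0.0
    den = low + ok + high
    return num / den if den else 0.5
```

Changing `setpoint` adjusts the whole surface at once, which mirrors the abstract's observation that the controller adapts easily when the DO set point changes.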

    A New Feature Extraction Algorithm to Extract Differentiate Information and Improve KNN-based Model Accuracy on Aquaculture Dataset

    In aquaculture, understanding the condition of a pond is very important for farmers in deciding which action to take to prevent bad conditions from occurring. The condition of a pond can be assessed by measuring many water parameters, which fall into three categories: physical, chemical, and biological. A physical parameter is any physical quantity that can be measured in the pond. A chemical parameter is any chemical substance dissolved in the water. A biological parameter is any organic matter that lives in the water. However, these parameters are not very distinguishable in representing the condition of a pond, so farmers experience difficulty in assessing the condition and taking proper action. Even with the help of the K-Nearest Neighbors (KNN) algorithm combined with grid-search optimization to model the data, the result is still not satisfying: the model only achieves an accuracy of 0.701 under leave-one-out validation. To overcome this problem, a feature extraction algorithm is needed to extract more information and make the data more separable in representing the condition of the pond. With the help of our proposed feature extraction algorithm, the optimized KNN can model the data more easily and achieve higher accuracy. The experiment results show that the proposed feature extraction algorithm performs impressively, increasing the accuracy to 0.741. A comparison with other feature extraction algorithms, namely Principal Component Analysis (PCA), Non-negative Matrix Factorization (NMF), and Singular Value Decomposition (SVD), is also conducted to validate the proposed algorithm. As a result, the proposed algorithm surpasses the others, which only achieve accuracies of 0.707, 0.718, and 0.718, respectively.
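The evaluation pipeline can be sketched as feature expansion followed by leave-one-out KNN scoring. Note that `expand_features` below is a hypothetical stand-in (pairwise parameter differences), not the paper's proposed extraction algorithm, whose details are not reproduced here.

```python
import numpy as np

def expand_features(X):
    """Hypothetical stand-in for a feature extraction step: augment each
    sample with pairwise differences between its parameters, which can make
    overlapping classes more separable for a distance-based model like KNN."""
    n, d = X.shape
    diffs = [X[:, i] - X[:, j] for i in range(d) for j in range(i + 1, d)]
    return np.column_stack([X] + diffs)

def loo_knn_accuracy(X, y, k=1):
    """Leave-one-out accuracy of a plain KNN classifier."""
    correct = 0
    for i in range(len(X)):
        dists = np.linalg.norm(X - X[i], axis=1)
        dists[i] = np.inf                       # exclude the held-out sample
        nearest = np.argsort(dists)[:k]
        votes = np.bincount(y[nearest])         # majority vote over neighbors
        correct += votes.argmax() == y[i]
    return correct / len(X)
```

Comparing `loo_knn_accuracy(X, y)` against `loo_knn_accuracy(expand_features(X), y)` reproduces the shape of the paper's experiment: same classifier, different feature spaces.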

    Grid SVM: A Machine Learning Application for Aquaculture Data Processing

    Water condition is the main factor affecting the success rate of aquaculture, especially shrimp cultivation. However, farmers often experience difficulty in determining this condition, which is stated based on measurements of various water parameters. Therefore, a proper classification model is needed to help farmers classify the water condition in a pond. Once the condition is known, proper and correct treatment can be given. In this research, a machine learning algorithm called the support vector machine (SVM) is used to build a model from an aquaculture dataset. Additional processing techniques, such as data normalization and an optimization algorithm named grid search, are also applied to improve the modelling result. Furthermore, k-fold cross-validation is performed to assess the performance of the model, measured by accuracy, precision, recall, F-measure, and AUROC. The SVM model is then compared with several models built with other machine learning algorithms, such as KNN, CNB, RF, MLP, and LR, to determine the best model to implement in the cultivation process. The experiment results show that the model built with SVM and grid-search optimization has the best performance in the validation process, with a performance score of 3.54383.
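The grid-search-with-k-fold procedure described above can be sketched generically. The sketch deliberately abstracts the learner behind a `fit_predict` callback: the function names, the accuracy-only scoring, and the parameter grid are illustrative assumptions, not the paper's actual SVM configuration.

```python
import numpy as np

def k_fold_indices(n, k):
    """Yield (train, validation) index pairs for k-fold cross-validation."""
    folds = np.array_split(np.arange(n), k)
    for i in range(k):
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, folds[i]

def grid_search(X, y, param_grid, fit_predict, k=5):
    """Exhaustive grid search scored by mean k-fold accuracy.
    `fit_predict(Xtr, ytr, Xval, params)` stands in for training a model
    (e.g. an SVM) with hyperparameters `params` and predicting labels."""
    best_params, best_score = None, -1.0
    for params in param_grid:
        fold_scores = []
        for tr, va in k_fold_indices(len(X), k):
            pred = fit_predict(X[tr], y[tr], X[va], params)
            fold_scores.append(float(np.mean(pred == y[va])))
        score = float(np.mean(fold_scores))
        if score > best_score:              # keep the best-scoring grid point
            best_params, best_score = params, score
    return best_params, best_score
```

In practice the paper's five metrics (accuracy, precision, recall, F-measure, AUROC) would replace the accuracy-only `fold_scores`; the search loop itself is unchanged.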

    BLOOM: A 176B-Parameter Open-Access Multilingual Language Model

    Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.

    Semantic Segmentation and Depth Estimation with RGB and DVS Sensor Fusion for Multi-view Driving Perception

    In this research, we present a novel deep multi-task learning model to handle the perception stage of an autonomous driving system. The model leverages the fusion of RGB and dynamic vision sensor (DVS) images to perform semantic segmentation and depth estimation from four different viewpoints simultaneously. For the experiment, the CARLA simulator is used to generate thousands of simulation samples for the training, validation, and testing processes. A dynamically changing environment with various weather conditions, times of day, maps, and non-player characters (NPCs) is also considered to simulate more realistic conditions, in the expectation of better model generalization. An ablation study is conducted by modifying the network architecture to evaluate the influence of the sensor fusion technique. Based on the test results on two different datasets, the model that leverages feature-map sharing between the RGB and DVS encoders performs better. Furthermore, we show that our model infers faster and performs comparably against another recent model. Official implementation code is shared at https://github.com/oskarnatan/RGBDVS-fusion
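The feature-map sharing step can be sketched with plain arrays: combine same-shaped encoder outputs from the two sensor branches so each decoder sees evidence from both. `fuse_feature_maps` and its two modes are a hedged illustration; the paper's exact fusion operator is not specified here.

```python
import numpy as np

def fuse_feature_maps(rgb_feat, dvs_feat, mode="sum"):
    """Hypothetical sketch of feature-map sharing between an RGB encoder
    and a DVS encoder, given features of identical shape (C, H, W)."""
    assert rgb_feat.shape == dvs_feat.shape
    if mode == "sum":        # element-wise addition keeps the channel count
        return rgb_feat + dvs_feat
    if mode == "concat":     # channel concatenation doubles the depth
        return np.concatenate([rgb_feat, dvs_feat], axis=0)
    raise ValueError(mode)
```

The ablation study in the abstract corresponds to toggling this step: a model without it would feed each decoder only its own branch's features.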
